- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.27)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (6 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
34adeb8e3242824038aa65460a47c29e-Supplemental.pdf
For notational simplicity, the GNN F here is considered in the GNTK format. The weights of F, φ, are i.i.d. Let ⟨a, b⟩ denote the inner product of vectors a and b. Next, we experimentally verify the necessity of training locally specialized NeighGen. Specifically, we present the L2 distance between the averaged feature distributions of neighborhoods from these three types of graphs, to show how the missing neighbors generated by NeighGen narrow the gap.
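The gap measurement described above can be sketched in a few lines (a minimal illustration; `feature_gap` is a hypothetical helper name, not the authors' code):

```python
import numpy as np

def feature_gap(feats_a, feats_b):
    """L2 distance between the averaged feature distributions of two
    neighborhood sets: each set of neighbor feature vectors is reduced
    to its mean, and the Euclidean distance between the means is returned."""
    return float(np.linalg.norm(feats_a.mean(axis=0) - feats_b.mean(axis=0)))
```

A smaller gap between a subgraph with generated neighbors and the full graph indicates that the generator is recovering the missing neighborhood statistics.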
Deep Survival Analysis for Competing Risk Modeling with Functional Covariates and Missing Data Imputation
Gao, Penglei, Zou, Yan, Duggal, Abhijit, Huang, Shuaiqi, Liang, Faming, Wang, Xiaofeng
We introduce the Functional Competing Risk Net (FCRN), a unified deep-learning framework for discrete-time survival analysis under competing risks, which seamlessly integrates functional covariates and handles missing data within an end-to-end model. By combining a micro-network Basis Layer for functional data representation with a gradient-based imputation module, FCRN simultaneously learns to impute missing values and predict event-specific hazards. Evaluated on multiple simulated datasets and a real-world ICU case study using the MIMIC-IV and Cleveland Clinic datasets, FCRN demonstrates substantial improvements in prediction accuracy over random survival forests and traditional competing risks models. This approach advances prognostic modeling in critical care by more effectively capturing dynamic risk factors and static predictors while accommodating irregular and incomplete data.
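The discrete-time competing-risks mechanics behind a model like FCRN can be sketched with numpy (an illustrative sketch under standard assumptions, not FCRN's actual code): per time bin, a softmax over K event types plus a "no event" option yields cause-specific hazards, from which cumulative incidence functions follow.

```python
import numpy as np

def discrete_hazards(logits):
    """Softmax over (K causes + 1 'no event') per time bin.

    logits: array of shape (T, K+1); the last column is the
    'survive this bin' option.  Returns cause-specific hazards
    h[t, k] = P(event k in bin t | survived to bin t).
    """
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return p[:, :-1]  # drop the 'no event' column

def cumulative_incidence(hazards):
    """CIF_k(t) = sum_{s<=t} h_k(s) * S(s-1), where the overall survival
    is S(t) = prod_{s<=t} (1 - sum_k h_k(s))."""
    total = hazards.sum(axis=1)                       # all-cause hazard per bin
    surv_prev = np.concatenate([[1.0], np.cumprod(1.0 - total)[:-1]])
    return np.cumsum(hazards * surv_prev[:, None], axis=0)
```

By construction the cause-specific cumulative incidences and the final survival probability sum to one, which is what makes the discrete-time softmax formulation convenient for competing risks.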
- North America > United States > Ohio > Cuyahoga County > Cleveland (0.04)
- Asia > Middle East > Israel (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- North America > United States > Indiana > Tippecanoe County > Lafayette (0.04)
Predicting Market Troughs: A Machine Learning Approach with Causal Interpretation
Rao, Peilin, Rojas, Randall R.
This paper provides robust new evidence on the causal drivers of market troughs. We demonstrate that conclusions about these triggers are critically sensitive to model specification: moving beyond restrictive linear models, we estimate average partial effects with a flexible double machine learning (DML) framework. Our robust estimates identify the volatility of options-implied risk appetite and market liquidity as key causal drivers, relationships that simpler models misrepresent or obscure. These findings provide high-frequency empirical support for intermediary asset pricing theories. The causal analysis is enabled by a high-performance nowcasting model that accurately identifies capitulation events in real time.
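The DML average-partial-effect idea can be sketched as cross-fitted partialling-out (a simplified illustration with assumed names, not the paper's implementation; the paper's estimator targets average partial effects more generally):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_partial_effect(y, d, X, n_splits=5, seed=0):
    """Cross-fitted partialling-out estimate of the effect of d on y,
    flexibly controlling for X:
    theta = E[(d - E[d|X])(y - E[y|X])] / E[(d - E[d|X])^2].
    Cross-fitting keeps the nuisance fits out-of-sample."""
    y_res = np.empty_like(y, dtype=float)
    d_res = np.empty_like(d, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        m = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        g = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        y_res[test] = y[test] - m.predict(X[test])   # outcome residual
        d_res[test] = d[test] - g.predict(X[test])   # treatment residual
    return float(d_res @ y_res / (d_res @ d_res))
```

Replacing the linear nuisance regressions with flexible learners is exactly what lets the estimate survive nonlinear confounding that a restrictive linear model would misattribute.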
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Economy (1.00)
A Appendix
A.1 Proof of Theorem
"friendship" and "message"), the result is extended trivially (through with more (l 1) (l 1) ( l 1) Algorithms 3 and 4 show how to extend KTN . Using the minimum length of meta-paths is enough for KTN. We also present the results with error bars on OAG-computer networks and OAG-machine learning in Tables 6 and 7, respectively. KTN consistently outperforms all baselines. These reversed results are a consequence of HGNN's unique feature extractors On the other hand, DAN and JAN define a loss in terms of higher-order MMD between source and target features.
A FedSage Algorithm
Referring to Section 4.3, FedSage+ includes two phases. We describe the aggregation operation below. We define the kernel matrix of two nodes u, v ∈ V as follows. Appendix B.1 needs to calculate (1) a covariance matrix. In GraphSAGE, this is equivalent to having K graph convolutional layers. The generalization ability in the NTK regime depends on the kernel matrix.
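The correspondence between GraphSAGE and K graph convolutional layers can be made concrete with a single mean-aggregation layer (an illustrative sketch with assumed names, not the FedSage code):

```python
import numpy as np

def sage_mean_layer(H, A, W_self, W_neigh):
    """One GraphSAGE layer with mean aggregation:
    h_v' = ReLU(W_self h_v + W_neigh * mean_{u in N(v)} h_u).
    H: (n, d) node features; A: (n, n) 0/1 adjacency matrix."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # avoid divide-by-zero
    neigh = (A @ H) / deg                           # mean of neighbor features
    out = H @ W_self.T + neigh @ W_neigh.T
    return np.maximum(out, 0.0)                     # ReLU
```

Stacking K such layers gives every node a K-hop receptive field, which is the sense in which the appendix equates a GraphSAGE model with K graph convolutional layers.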
code_transformed: The Influence of Large Language Models on Code
Xu, Yuliang, Huang, Siming, Geng, Mingmeng, Wan, Yao, Shi, Xuanhua, Chen, Dongping
Coding remains one of the most fundamental modes of interaction between humans and machines. With the rapid advancement of Large Language Models (LLMs), code generation capabilities have begun to significantly reshape programming practices. This development prompts a central question: Have LLMs transformed code style, and how can such transformation be characterized? In this paper, we present a pioneering study that investigates the impact of LLMs on code style, with a focus on naming conventions, complexity, maintainability, and similarity. By analyzing code from over 19,000 GitHub repositories linked to arXiv papers published between 2020 and 2025, we identify measurable trends in the evolution of coding style that align with characteristics of LLM-generated code. For instance, the proportion of snake_case variable names in Python code increased from 47% in Q1 2023 to 51% in Q1 2025. Furthermore, we investigate how LLMs approach algorithmic problems by examining their reasoning processes. Given the diversity of LLMs and usage scenarios, among other factors, it is difficult or even impossible to precisely estimate the proportion of code generated or assisted by LLMs. Our experimental results provide the first large-scale empirical evidence that LLMs affect real-world programming style.
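A measurement like the snake_case proportion above can be sketched with the standard-library `ast` module (a simplified illustration with assumed heuristics, not the paper's pipeline):

```python
import ast
import re

SNAKE = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)+$")  # multi-word lowercase names

def snake_case_ratio(source):
    """Share of multi-word assigned variable names written in snake_case.

    Single-word names like `x` are skipped, since they are neither
    snake_case nor camelCase; leading-underscore names count as
    multi-word but not snake_case under this simple regex.
    """
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            names.add(node.id)
    multiword = [n for n in names if "_" in n or re.search(r"[a-z][A-Z]", n)]
    if not multiword:
        return 0.0
    return sum(bool(SNAKE.match(n)) for n in multiword) / len(multiword)
```

Running such a counter over repository snapshots bucketed by quarter is one way to obtain the kind of trend line the abstract reports.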
GHOST: Gaussian Hypothesis Open-Set Technique
Rabinowitz, Ryan, Cruz, Steve, Günther, Manuel, Boult, Terrance E.
Evaluations of large-scale recognition methods typically focus on overall performance. While this approach is common, it often fails to provide insights into performance across individual classes, which can lead to fairness issues and misrepresentation. Addressing these gaps is crucial for accurately assessing how well methods handle novel or unseen classes and ensuring a fair evaluation. To address fairness in Open-Set Recognition (OSR), we demonstrate that per-class performance can vary dramatically. We introduce Gaussian Hypothesis Open Set Technique (GHOST), a novel hyperparameter-free algorithm that models deep features using class-wise multivariate Gaussian distributions with diagonal covariance matrices. We apply Z-score normalization to logits to mitigate the impact of feature magnitudes that deviate from the model's expectations, thereby reducing the likelihood of the network assigning a high score to an unknown sample. We evaluate GHOST across multiple ImageNet-1K pre-trained deep networks and test it with four different unknown datasets. Using standard metrics such as AUOSCR, AUROC and FPR95, we achieve statistically significant improvements, advancing the state-of-the-art in large-scale OSR. Source code is provided online.
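The core idea of class-wise diagonal Gaussians with z-score-attenuated logits can be sketched as follows (a simplified stand-in for GHOST's scoring, with assumed names; the paper's exact normalization may differ):

```python
import numpy as np

def fit_class_gaussians(features, labels, n_classes):
    """Per-class mean and per-dimension std of deep features,
    i.e. a multivariate Gaussian with diagonal covariance per class."""
    mu = np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])
    sd = np.stack([features[labels == c].std(axis=0) + 1e-8 for c in range(n_classes)])
    return mu, sd

def ghost_style_score(feature, logits, mu, sd):
    """Attenuate the top logit by how atypical the feature is for the
    predicted class, measured as the mean absolute z-score under that
    class's diagonal Gaussian; low scores flag likely unknowns."""
    c = int(np.argmax(logits))
    z = np.abs((feature - mu[c]) / sd[c])
    return logits[c] / (1.0 + z.mean())
```

A sample whose features sit far from every class's Gaussian gets its score pushed down even when one logit is large, which is the behavior the abstract describes for unknown inputs.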
- North America > United States > Colorado > El Paso County > Colorado Springs (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
- Research Report > Experimental Study (0.69)
- Research Report > New Finding (0.46)